Though Robert Boyle called final causes one of the most important subjects for a natural philosopher to study, his own treatise on the subject, the Disquisition about Final Causes, has received comparatively little scholarly attention. In this paper, I explicate Boyle's complex argument against the use of teleological explanations for inanimate bodies, such as metals. The central object of this argument is a mysterious allusion to a silver plant. I claim that the silver plant is best understood as a reference to the alchemical product the Arbor Dianae, an offshoot of George Starkey's recipe for the Philosophers' Stone. I then show how the context of alchemy not only clarifies Boyle's argument but also places it within a wider dialectic about matter and teleology. Finally, I contrast the parallel arguments of Boyle and John Ray on the question of whether metals have divine purposes and show that the difference between them is explained by Boyle's belief in the transmutation of metals.

The microscopic explanation of the physical phenomena represented by a macroscopic theory is often cast in terms of the reduction of the latter to a more fundamental theory, which represents the same phenomena at the microscopic level, albeit in an idealized way. In particular, the reduction of thermodynamics to statistical mechanics is a much discussed case study in philosophy of physics. On the Generalized Nagel-Schaffner model, the alleged reductive explanation would be accomplished if one found a corrected version of classical thermodynamics that can be strictly derived from statistical mechanics. That is the sense in which, according to Callender (1999, 2001), one should not take thermodynamics too seriously. Arguably, the sought-after revision is given by statistical thermodynamics, understood as a macroscopic theory equipped with a probabilistic law of equilibrium fluctuations. The present paper aims to evaluate this proposal.
The upshot is that, while statistical thermodynamics enables one to re-define equilibrium so as to agree with Boltzmann entropy, it does not provide a definitive solution to the problem of explaining macroscopic irreversibility from a microscopic point of view.

Scientists often diverge widely when choosing between research programs. This divergence can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable, i.e., the most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn's analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory 'acceptance', including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness, especially by exaggerating the impact of more present-oriented virtues, or by failing to stress how 'competing' theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.

To date, the most elaborate attempt to complete quantum mechanics by the addition of hidden variables is the de Broglie-Bohm (pilot-wave) theory (dBBT). It endows particles with definite positions at all times, and their evolution is governed by a deterministic dynamics.
By construction, however, the individual particle trajectories generically defy detectability in principle. Of late, this lore might seem to have been called into question in light of so-called weak measurements. Owing to their characteristically weak coupling between the measurement device and the system under study, they permit the experimental probing of quantum systems without essentially disturbing them. It is therefore natural to think that weak measurements of velocity in particular make it possible to actually observe the particle trajectories. If true, such a claim would not only experimentally demonstrate the incompleteness of quantum mechanics; it would also provide support for dBBT in its standard form, singling it out from an infinitude of empirically equivalent alternative choices for the particle dynamics. Here we examine this possibility. Our result is deflationary: weak velocity measurements constitute no new argument, let alone empirical evidence, in favour of standard dBBT; one must not naïvely identify weak and actual positions. Weak velocity measurements admit of a straightforward standard quantum mechanical interpretation, independent of any commitment to particle trajectories and velocities. This is revealed by a careful reconstruction of the physical arguments on which the description of weak velocity measurements rests. It turns out that for weak velocity measurements to be reliable, one must already presuppose dBBT in its standard form; in this sense, they can provide no new argument, empirical or otherwise, for dBBT and its standard guidance equation.

Measurement results depend upon assumptions, and some of those assumptions are theoretical in character. This paper examines particle physics measurements in which a measurement result depends upon a type of assumption for which that very same result may be evidentially relevant, thus raising a worry about potential circularity in argumentation.
We demonstrate how the practice of evaluating measurement uncertainty serves to render any such evidential circularity epistemically benign. Our analysis shows how the evaluation and deployment of measurement uncertainty constitutes an in-practice solution to a particular form of Duhemian underdetermination, one that improves upon Duhem's vague notion of "good sense," avoids holism, and reconciles the theory dependence of measurement with piecemeal hypothesis testing.